*Note: When developing for Android, the Android SDK and Java JDK have to be ticked in the installation modules when installing Unity.*
→ More information on the execution order of events in Unity
→ More information about AR Foundation
Scenes: https://docs.unity3d.com/Manual/CreatingScenes.html
Game Objects: https://docs.unity3d.com/ScriptReference/GameObject.html
Prefabs: https://docs.unity3d.com/Manual/Prefabs.html
Packages: https://docs.unity3d.com/Manual/PackagesList.html
Button
A Button has an OnClick UnityEvent to define what it will do when clicked.
Slider
A Slider has a decimal number Value that the user can drag between a minimum and maximum value. It can be either horizontal or vertical. It also has a OnValueChanged UnityEvent to define what it will do when the value is changed.
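As a minimal sketch of how these two events can be wired from code (the class and field names below are illustrative assumptions; in this tutorial the events are mostly assigned through the Inspector):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class UiEventsExample : MonoBehaviour
{
    public Button myButton;   // illustrative field, assigned in the Inspector
    public Slider mySlider;   // illustrative field, assigned in the Inspector

    void Start()
    {
        // Called whenever the Button is clicked
        myButton.onClick.AddListener(() => Debug.Log("Button clicked"));

        // Called with the new value whenever the Slider is dragged
        mySlider.onValueChanged.AddListener(value => Debug.Log("Slider value: " + value));
    }
}
```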
Raycasters
Raycasters are used for figuring out what the pointer is over.
AR Raycast Manager
Also known as hit testing, ray casting allows you to determine where a ray (defined by an origin and direction) intersects with a trackable. A “trackable” is a feature in the physical environment that can be detected and tracked by an XR device.
[SerializeField]
ARRaycastManager m_RaycastManager;

List<ARRaycastHit> m_Hits = new List<ARRaycastHit>();

void Update()
{
    if (Input.touchCount == 0)
        return;

    if (m_RaycastManager.Raycast(Input.GetTouch(0).position, m_Hits))
    {
        // Only returns true if there is at least one hit
    }
}
Physics Raycaster
Used for 3D physics elements. Casts a ray against all colliders in the Scene and returns detailed information on what was hit. This example reports the distance between the current object and the reported Collider:
using UnityEngine;

public class RaycastExample : MonoBehaviour
{
    void FixedUpdate()
    {
        RaycastHit hit;

        if (Physics.Raycast(transform.position, -Vector3.up, out hit))
            print("Found an object - distance: " + hit.distance);
    }
}
Variables hold values and references to objects. Variables start with a lowercase letter. When Unity compiles the script, it makes public variables visible in the editor.
Functions are collections of code that compare and manipulate these variables. Functions start with an uppercase letter.
Start Function
Start will be called when the GameObject is active, but only if the component is enabled.
Update Function
Update is called once per frame. This is where you put the code that defines logic running continuously, like animations, AI, and other parts of the game that have to be constantly updated.
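A minimal example that combines these conventions (the class name, variable, and behaviour here are illustrative only, not part of the tutorial project):

```csharp
using UnityEngine;

public class ExampleBehaviour : MonoBehaviour
{
    // Public variable: starts with a lowercase letter and is visible in the Inspector
    public float rotationSpeed = 45f;

    // Start: runs once when the enabled component's GameObject becomes active
    void Start()
    {
        Debug.Log("Initialised with speed " + rotationSpeed);
    }

    // Update: runs once per frame, for logic that must be evaluated continuously
    void Update()
    {
        transform.Rotate(Vector3.up, rotationSpeed * Time.deltaTime);
    }
}
```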
Edit Mode
Delete Mode
Tap once on an object to delete it immediately.
Menu
When pressed, new buttons pop up.
Visibility Button
Turns the visibility of the 3D printed object on and off.
Reset Button
Resets all scanned planes and deletes all objects in the Scene.
Play Button
Starts animating the characters.
Interactive Sunlight
In short, we will import two interactive sliders that manipulate the Sunlight by changing its Altitude and Azimuth. By changing these values, we are effectively stepping through the seasons and the hours of the day.
Import UnityPackage
Since we imported new Prefabs and want to incorporate them into our scene, we need to take a few steps to relink the dependencies for Scripts and Buttons.
Since we imported the Instantiator again, we have to re-link the previous GameObject references where needed.
AR Canvas - New Buttons
Note: Notice that it is renamed to AR_Canvas(1). This happens because the imported Prefab has the same name as the AR_Canvas already in the scene. We will take the Children we need and move them into our existing AR_Canvas.
Likewise, completely unpack the AR_Canvas(1).
Delete the existing Reset button (there is already one included in the new Menu).
Light
Delete Previous Light
We will import a new Light in the Scene, so let’s delete the existing Directional Light in the Hierarchy.
Import as GameObject
We will now drag and drop the Directional Light Prefab from the Asset Folder to the Hierarchy.
Unpack Prefab Completely
Right Click on the Directional Light > Prefab > Unpack Completely. This will “detach” the link between the GameObject and the Prefab.
Link Sliders to the Script
Drag and drop the Azimuth and Altitude sliders (children of the AR Canvas) onto the Sun Script in the Inspector. Also drag and drop the parent of the 3D object (houseParent).
You should be able to see the newly added UI Menu on the Right.
Go to the new Menu GameObject that was imported. In the Inspector, find the MenuScript (C# Script component), then double-click it to open the code.
Script Overview
using UnityEngine;
using UnityEngine.UI;

public class Menuscript : MonoBehaviour
{
    // Variables: the sub-menu buttons toggled by this script
    public GameObject Shadow_Button;
    public GameObject Reset_Button;
    public GameObject Animation_Button;

    // Start is called before the first frame update
    void Start()
    {
        // For each button, define OnClick Action and prefab
        Button btn = GetComponent<Button>();
        btn.onClick.AddListener(Menu_Toggle);
    }

    // Toggle ON and OFF the dropdown submenu options
    private void Menu_Toggle()
    {
        // Deactivate the buttons if they are on
        if (Shadow_Button.activeSelf == true)
        {
            Shadow_Button.SetActive(false);
            Reset_Button.SetActive(false);
            Animation_Button.SetActive(false);
        }
        else
        {
            Shadow_Button.SetActive(true);
            Reset_Button.SetActive(true);
            Animation_Button.SetActive(true);
        }
    }
}
→ Here, AddListener redirects to the Menu_Toggle() function. This is a computationally efficient way to run a function when the button is clicked, instead of checking for a click on every frame.
→ This Menu Script turns the different buttons on and off. This allows us to create a “pop-up” Menu that groups multiple buttons and makes the UI experience more compact.
Go to the Instantiator C# Script.
How we set a mode
public void SetMode_A()
{
    mode = 0; // for single placement of objects, like the 3D printed house hologram
}

public void SetMode_B()
{
    mode = 1; // for multiple placement of objects, like multiple trees or characters
}

public void SetMode_C()
{
    mode = 2; // for editing existing objects
}

public void SetMode_D()
{
    mode = 3; // for deleting objects
}
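These methods only set the mode variable; in the project the mode buttons call them through their OnClick events. As a rough sketch of how the same wiring could be done from code (assuming the script's class is named Instantiator; the button fields below are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ModeButtonsExample : MonoBehaviour
{
    // Illustrative references, assigned in the Inspector
    public Button addOneButton;
    public Button addMultipleButton;
    public Instantiator instantiator;

    void Start()
    {
        // Each button simply switches the Instantiator into the matching mode
        addOneButton.onClick.AddListener(instantiator.SetMode_A);
        addMultipleButton.onClick.AddListener(instantiator.SetMode_B);
    }
}
```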
How these modes are used
private void _InstantiateOnTouch()
{
    if (Input.touchCount > 0) // if there is an input..
    {
        if (PhysicRayCastBlockedByUi(Input.GetTouch(0).position))
        {
            if (mode == 0) // ADD ONE : destroy previous object with every tap
            {
                Debug.Log("***MODE 0***");
                Touch touch = Input.GetTouch(0);

                // Handle finger movements based on TouchPhase
                switch (touch.phase)
                {
                    case TouchPhase.Began:
                        if (Input.touchCount == 1)
                        {
                            _PlaceInstant(houseParent);
                        }
                        break; // break: if this case is true, the other cases are not checked (more computationally efficient)
                    case TouchPhase.Moved:
                        // Record initial touch position.
                        if (Input.touchCount == 1)
                        {
                            _Rotate(ARObject_new);
                        }
                        if (Input.touchCount == 2)
                        {
                            _PinchtoZoom(ARObject_new);
                        }
                        break;
                    case TouchPhase.Ended:
                        Debug.Log("Touch Phase Ended.");
                        break;
                }
            }
            else if (mode == 1) // ADD MULTIPLE : create multiple instances of object
            {
                Debug.Log("***MODE 1***");
                Touch touch = Input.GetTouch(0);

                // Handle finger movements based on TouchPhase
                switch (touch.phase)
                {
                    case TouchPhase.Began:
                        if (Input.touchCount == 1)
                        {
                            _PlaceInstant(objectParent);
                        }
                        break;
                    case TouchPhase.Moved:
                        // Record initial touch position.
                        if (Input.touchCount == 1)
                        {
                            _Rotate(ARObject_new);
                        }
                        if (Input.touchCount == 2)
                        {
                            _PinchtoZoom(ARObject_new);
                        }
                        break;
                    case TouchPhase.Ended:
                        Debug.Log("Touch Phase Ended.");
                        break;
                }
            }
            else if (mode == 2) // EDIT MODE
            {
                Debug.Log("***MODE 2***");
                _EditMode();
            }
            else if (mode == 3) // DELETE MODE
            {
                Debug.Log("***MODE 3***");
                activeGameObject = SelectedObject();
                _DestroySelected(activeGameObject);
            }
            else
            {
                Debug.Log("Press a button to initialize a mode");
            }
        }
    }
}
EditMode() Function
mode==2
private void _EditMode()
{
    if (Input.touchCount == 1) // try and locate the selected object only when we click, not on update
    {
        activeGameObject = SelectedObject();
    }

    if (activeGameObject != null) // change the pinch and zoom placeholder only when we locate a new object
    {
        temporaryObject = activeGameObject;
        _addBoundingBox(temporaryObject); // add bounding box around selected game object
    }

    _Move(temporaryObject);
    _PinchtoZoom(temporaryObject);
}
Find Selected Object by RayCasting
Note: This function returns an object (the activeGameObject) when the Raycast hits the Physics collider of that object.
private GameObject SelectedObject(GameObject activeGameObject = null)
{
    Touch touch;
    touch = Input.GetTouch(0);

    // delete the previous selection boundary, it will be replaced with a new one
    if (Input.touchCount == 1 && touch.phase == TouchPhase.Ended)
    {
        Debug.Log("Single Touch");
        List<ARRaycastHit> hits = new List<ARRaycastHit>();
        rayManager.Raycast(touch.position, hits);

        if (hits.Count > 0)
        {
            Debug.Log("Ray shooting from camera");
            Ray ray = arCamera.ScreenPointToRay(touch.position);
            RaycastHit hitObject;

            // if our touch hits an existing object, we find that object
            if (Physics.Raycast(ray, out hitObject))
            {
                // we make sure we didn't tap a plane
                if (hitObject.collider.tag != "plane")
                {
                    // setting the variable
                    Debug.Log("Selected object located");
                    activeGameObject = hitObject.collider.gameObject; // assign GameObject as the active one
                    Debug.Log(activeGameObject);
                }
            }
        }
    }

    // return the located object (or null if nothing was selected)
    return activeGameObject;
}
In Delete mode, we use the same function to locate the object hit by the Raycast, and then, instead of moving, rotating or scaling it, we simply call the Destroy() function.
else if (mode == 3) // DELETE MODE
{
    Debug.Log("***MODE 3***");
    activeGameObject = SelectedObject();
    _DestroySelected(activeGameObject);
}
private void _DestroySelected(GameObject gameObjectToDestroy)
{
    Destroy(gameObjectToDestroy);
}
→ Pro Tip: If you want to go directly to a function you see in the code, you can CTRL+click on its name (e.g. here we would CTRL+click on _DestroySelected(activeGameObject)).
In the new Directional Light we imported, there is a custom C# script attached named “Sun”.
Here, we link the position and rotation of our Sunlight according to the slider values we have on our UI Canvas, which we manipulate on the fly.
Also, we use this script to turn ON/OFF the visibility of our 3D printed object (that’s why we use the House Parent GameObject). Let’s take a look at the scripts.
Azimuth - Altitude
On line 40, we add listeners for every time we change the slider for each parameter.
azimuth_slider.onValueChanged.AddListener(AdjustAzimuth);
altitude_slider.onValueChanged.AddListener(AdjustAltitude);
On line 45, we assign these new values and pass them to the AdjustTime() function.
public void AdjustAzimuth(float value)
{
    New_Azimuth = value;
    AdjustTime(New_Azimuth, New_Altitude);
}

public void AdjustAltitude(float value)
{
    New_Altitude = value;
    AdjustTime(New_Azimuth, New_Altitude);
}
On line 69, we adjust the position of our 3D Sphere object according to the Azimuth and Altitude values. The centerpoint of this sphere is the House (the 3D printed object).
if (house != null)
{
    coordPosition.x = radius * Mathf.Cos(New_Azimuth * Mathf.Deg2Rad) * Mathf.Cos(New_Altitude * Mathf.Deg2Rad);
    coordPosition.z = radius * Mathf.Cos(New_Azimuth * Mathf.Deg2Rad) * Mathf.Sin(New_Altitude * Mathf.Deg2Rad);
    coordPosition.y = radius * Mathf.Sin(New_Azimuth * Mathf.Deg2Rad);
    coordPosition += centerpoint;

    sun.transform.position = new Vector3(coordPosition.x, coordPosition.y, coordPosition.z);
    sun.transform.LookAt(house.transform);
}
Note: We use the LookAt command to rotate the sunlight, making it “look” at our object every time it moves.
In the same C# Script (Sun), we use the VisibilityToggle() function to turn the visibility of the 3D model ON/OFF while still keeping its shadows.
public void VisibilityToggle()
{
    // **Preview ON/OFF the house 3D model**
    // Check if the house is instantiated
    if (houseParent.transform.childCount != 0)
        house = houseParent.transform.GetChild(0).gameObject;

    if (house != null)
    {
        // Get access to the model obj and adjust the MeshRenderer parameters
        GameObject obj = house.transform.GetChild(0).gameObject;
        Debug.Log(obj);

        if (shadowMode == 0)
        {
            obj.GetComponent<MeshRenderer>().shadowCastingMode = UnityEngine.Rendering.ShadowCastingMode.ShadowsOnly;
            shadowMode = 1;
        }
        else
        {
            obj.GetComponent<MeshRenderer>().shadowCastingMode = UnityEngine.Rendering.ShadowCastingMode.On;
            shadowMode = 0;
        }
    }
}
“Animanimals” Script
Go to the Prefabs folder and click on the Animated Characters.
These downloaded Assets come with different Animations embedded in the prefab. This means that by assigning different functions, they can “switch” their animation to the desired preset. Our goal is to activate the “Walk” animation, so that our characters can walk around in the Augmented Reality environment.
Overview of Animanimals Script
Open the code by double-clicking on the Animanimals Script. First, we locate the Button we need (which is named “Animation_Button” in our AR Canvas) and then add a Listener to it. When the button is clicked, the ControllPlayer() function is called. It sets a value of either 1 or 0 (1 means moving, i.e. the “Walk” animation). So when the “Animation_Button” is clicked, the “Walk” animation is activated and our characters start to move.
void Start()
{
    anim = gameObject.GetComponent<Animator>();
    rb = GetComponent<Rigidbody>();

    // Find the Animation button in the AR Canvas and listen for clicks
    btn = GameObject.Find("/AR_Canvas/Menu/Animation_Button").GetComponent<Button>();
    btn.onClick.AddListener(ControllPlayer);
}

void Update()
{
    Debug.Log(move);

    if (move)
    {
        Vector3 movement = transform.forward;
        transform.Translate(movement * movementSpeed * Time.deltaTime, Space.World);
    }
}

public void ControllPlayer()
{
    Debug.Log("walk");
    anim.SetInteger("Walk", 1);

    if (move)
    {
        move = false;
    }
    else
    {
        move = true;
    }

    Debug.Log("move");
}
Colliders
In order for our animations and object placement to work, we have to make sure our imported Prefabs have Colliders and Rigidbody components.
Collider components define the shape of a GameObject for the purposes of physical collisions. A collider, which is invisible, does not need to be the exact same shape as the GameObject’s mesh.
→ Our cat has a Box Collider attached to it.
A rough approximation of the mesh is often more efficient and indistinguishable in gameplay. The simplest (and least processor-intensive) colliders are the primitive collider types. In 3D, these are the Box Collider, Sphere Collider and Capsule Collider.
Mesh Collider (Component)
Documentation Here
The Mesh Collider builds its collision representation from the Mesh attached to the GameObject, and reads the properties of the attached Transform to set its position and scale correctly. The benefit of this is that you can make the shape of the Collider exactly the same as the shape of the visible Mesh for the GameObject, resulting in more precise and authentic collisions. However, this precision comes with a higher processing overhead than collisions involving primitive colliders (such as Sphere, Box, and Capsule) and so it is best to use Mesh Colliders sparingly.
Check if all of your prefabs have some sort of Collider.
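If you prefer to verify this from code rather than by inspecting every Prefab by hand, a minimal sketch along these lines could be attached to the instantiated objects (the component choices here are illustrative assumptions, not part of the tutorial scripts):

```csharp
using UnityEngine;

public class EnsurePhysicsSetup : MonoBehaviour
{
    void Awake()
    {
        // Add a simple Box Collider if the prefab has no collider at all
        if (GetComponent<Collider>() == null)
        {
            gameObject.AddComponent<BoxCollider>();
            Debug.LogWarning(name + " had no Collider; a BoxCollider was added.");
        }

        // Add a Rigidbody so the physics engine can drive the object
        if (GetComponent<Rigidbody>() == null)
        {
            gameObject.AddComponent<Rigidbody>();
        }
    }
}
```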
Rigid Body (Component)
Rigid Body Properties
Mass: The mass of the object (in kilograms by default).
Drag: How much air resistance affects the object when moving from forces. 0 means no air resistance, and infinity makes the object stop moving immediately.
Angular Drag: How much air resistance affects the object when rotating from torque. 0 means no air resistance. Note that you cannot make the object stop rotating just by setting its Angular Drag to infinity.
Use Gravity: If enabled, the object is affected by gravity.
Is Kinematic: If enabled, the object will not be driven by the physics engine, and can only be manipulated by its Transform. This is useful for moving platforms or if you want to animate a Rigidbody that has a HingeJoint attached.
→ The Rigidbody component is what makes our animals “fall”, or “move around”.
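These properties can also be set from a script. A minimal sketch, with arbitrary example values:

```csharp
using UnityEngine;

public class RigidbodySetupExample : MonoBehaviour
{
    void Start()
    {
        // Illustrative values only; tune them for your own objects
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.mass = 1.0f;          // in kilograms by default
        rb.drag = 0.5f;          // air resistance applied to movement
        rb.angularDrag = 0.05f;  // air resistance applied to rotation
        rb.useGravity = true;    // let the object fall
        rb.isKinematic = false;  // let the physics engine drive the object
    }
}
```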
Now, it is time to continue developing your Augmented Reality environment setup! Good luck!